
    Hidden Markov Models and their Application for Predicting Failure Events

    Full text link
    We show how Markov mixed membership models (MMMM) can be used to predict the degradation of assets. We model the degradation path of individual assets in order to predict overall failure rates. Instead of a separate distribution for each hidden state, we use hierarchical mixtures of distributions in the exponential family. In our approach, the observation distribution of each state is a finite mixture of a small set of (simpler) distributions shared across all states. Using tied-mixture observation distributions offers several advantages: the mixtures act as a regularizer for typically very sparse problems, and they reduce the computational effort of the learning algorithm since there are fewer distributions to estimate. Sharing mixtures also enables sharing of statistical strength between the Markov states and thus transfer learning. We determine, for individual assets, the trade-off between the risk of failure and extended operating hours by combining an MMMM with a partially observable Markov decision process (POMDP) to dynamically optimize the policy for when and how to maintain the asset. Comment: To appear in the proceedings of ICCS 2020. Citation: @Booklet{EasyChair:3183, author = {Paul Hofmann and Zaid Tashman}, title = {Hidden Markov Models and their Application for Predicting Failure Events}, howpublished = {EasyChair Preprint no. 3183}, year = {2020}}
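
    As a concrete illustration of the tied-mixture idea, here is a minimal sketch (hypothetical parameters and names, not the authors' code) of an HMM observation model in which all hidden states share one pool of component distributions and differ only in their mixture weights:

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_components = 4, 3   # hidden degradation states, shared components

# Shared pool of simple exponential-family components (here: Gaussians),
# tied across all hidden states.
comp_mean = np.array([0.0, 2.0, 5.0])
comp_std = np.array([1.0, 1.0, 1.5])

# The only state-specific observation parameters are the mixture weights
# over the shared pool -- this is the "tied mixture" structure.
mix_weights = rng.dirichlet(np.ones(n_components), size=n_states)

def state_likelihoods(x):
    """p(x | state) for every state under the tied-mixture model."""
    # Component densities are computed once and reused by all states.
    comp_pdf = np.exp(-0.5 * ((x - comp_mean) / comp_std) ** 2) \
        / (comp_std * np.sqrt(2 * np.pi))
    return mix_weights @ comp_pdf   # shape: (n_states,)

print(state_likelihoods(1.7))
```

    Because the shared components are evaluated once per observation regardless of the number of states, the model has far fewer free parameters than one with a separate distribution per state, which is the regularization and transfer effect described above.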

    On optimal extended row distance profile

    Get PDF
    In this paper, we investigate the extended row distances of unit memory (UM) convolutional codes. In particular, we derive upper and lower bounds for these distances and present a concrete construction of a UM convolutional code that almost achieves the derived upper bounds. The generator matrix of these codes is built by means of a particular class of matrices called superregular matrices. We conjecture that the presented construction is optimal with respect to the extended row distances, as it achieves the maximum extended row distances possible; this would in particular imply that the derived upper bound is not completely tight. The results presented in this paper further develop the line of research devoted to the distance properties of convolutional codes, which has mainly focused on the notions of free distance and column distance. Some open problems are left for further research.
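
    For orientation, one common way to define the j-th extended row distance (exact conventions vary across the literature; this formulation is an assumption, not quoted from the paper) is via detours in the state diagram:

```latex
% j-th extended row distance: the minimum Hamming weight of a code
% segment whose state path leaves the zero state at time 0 and first
% returns to it at time j+1.
\[
  \hat{d}^{\,r}_{j} \;=\; \min\bigl\{\, \mathrm{wt}\bigl(v_{[0,\,j]}\bigr) \;:\;
  \sigma_{0} = 0,\ \sigma_{j+1} = 0,\ \sigma_{t} \neq 0
  \ \text{for}\ 0 < t \leq j \,\bigr\}
\]
```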

    An iterative algorithm for parametrization of shortest length shift registers over finite rings

    Get PDF
    The construction of shortest feedback shift registers for a finite sequence S_1,...,S_N is considered over the finite ring Z_{p^r}. A novel algorithm is presented that yields a parametrization of all shortest feedback shift registers for the sequence S_1,...,S_N, thus solving an open problem in the literature. The algorithm iteratively processes each number, starting with S_1, and constructs at each step a particular type of minimal Gröbner basis. The construction involves a simple update rule at each step, which leads to computational efficiency. It is shown that the algorithm simultaneously computes a similar parametrization for the reciprocal sequence S_N,...,S_1. Comment: Submitted.
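
    For context, the classical Berlekamp-Massey algorithm solves the field case: it finds one (not all) shortest linear feedback shift register for a sequence over GF(p). The sketch below shows that baseline; it is emphatically not the paper's parametrization algorithm over Z_{p^r}:

```python
def berlekamp_massey(seq, p):
    """Classical Berlekamp-Massey over the prime field GF(p): returns
    the connection polynomial coefficients and length of one shortest
    LFSR generating seq. Field case only -- not the Z_{p^r} setting."""
    C, B = [1], [1]      # current / previous connection polynomials
    L, m, b = 0, 1, 1    # LFSR length, steps since last update, last discrepancy
    for n, s in enumerate(seq):
        # Discrepancy between the next element and the LFSR prediction.
        d = (s + sum(C[i] * seq[n - i] for i in range(1, L + 1))) % p
        if d == 0:
            m += 1
            continue
        coef, T = d * pow(b, -1, p) % p, C[:]
        if len(B) + m > len(C):
            C += [0] * (len(B) + m - len(C))
        for i, Bi in enumerate(B):   # C(x) -= (d/b) * x^m * B(x)
            C[i + m] = (C[i + m] - coef * Bi) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C, L

print(berlekamp_massey([1, 2, 4, 3, 1, 2], 5))   # example over GF(5)
```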

    Complexity of Discrete Energy Minimization Problems

    Full text link
    Discrete energy minimization is widely used in computer vision and machine learning for problems such as MAP inference in graphical models. The problem is, in general, notoriously intractable, and finding the globally optimal solution is known to be NP-hard. However, is it possible to approximate this problem with a reasonable ratio bound on the solution quality in polynomial time? We show in this paper that the answer is no. Specifically, we show that general energy minimization, even in the 2-label pairwise case, and planar energy minimization with three or more labels are exp-APX-complete. This finding rules out the existence of any approximation algorithm with a sub-exponential approximation ratio in the input size for these two problems, including constant-factor approximations. Moreover, we collect and review the computational complexity of several subclass problems and arrange them on a complexity scale consisting of three major complexity classes -- PO, APX, and exp-APX -- corresponding to problems that are solvable, approximable, and inapproximable in polynomial time, respectively. Problems in the first two complexity classes can serve as alternative tractable formulations to the inapproximable ones. This paper can help vision researchers select an appropriate model for an application or guide them in designing new algorithms. Comment: ECCV'16 accepted.
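
    To make the "2-label pairwise case" concrete, here is a toy instance (illustrative names and numbers) evaluated by brute force; the exponential enumeration is exactly what, by the exp-APX-completeness result, no polynomial-time algorithm can meaningfully shortcut in general:

```python
import itertools

# Toy 2-label pairwise energy: E(x) = sum_i u[i][x_i] + sum_{ij} w[ij][x_i][x_j]
unary = [[0.0, 1.5], [2.0, 0.5], [1.0, 1.0]]       # u[i][label]
pairwise = {(0, 1): [[0.0, 2.0], [2.0, 0.0]],      # Potts-like couplings
            (1, 2): [[0.0, 2.0], [2.0, 0.0]]}

def energy(x):
    e = sum(unary[i][xi] for i, xi in enumerate(x))
    return e + sum(w[x[i]][x[j]] for (i, j), w in pairwise.items())

# Brute force over all 2^n labelings -- exponential in the input size.
best = min(itertools.product([0, 1], repeat=len(unary)), key=energy)
print(best, energy(best))
```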

    On the complexity of arithmetic secret sharing

    Get PDF
    Since the mid-2000s, asymptotically good strongly multiplicative linear (ramp) secret sharing schemes over a fixed finite field have emerged as a central theoretical primitive in numerous constant-communication-rate results in multi-party cryptographic scenarios and, surprisingly, in two-party cryptography as well. Known constructions of this most powerful class of arithmetic secret sharing schemes all rely heavily on algebraic geometry (AG), i.e., on dedicated AG codes based on asymptotically good towers of algebraic function fields defined over finite fields. It has been a well-known open question since the first (explicit) constructions of such schemes appeared in CRYPTO 2006 whether the use of this “heavy machinery” can be avoided; i.e., whether the mere existence of such schemes can be proved by “elementary” techniques only (say, from classical algebraic coding theory), even disregarding effective construction. So far, there has been no progress. In this paper we show that, (1) no matter whether this open question has an affirmative answer or not, these schemes can be constructed explicitly by elementary algorithms defined in terms of basic algebraic coding theory. This pertains to all relevant operations associated with such schemes, including, notably, the generation of an instance for a given number of players n, as well as error correction in the presence of corrupt shares. We further show that (2) these algorithms run in quasi-linear time (in n), which is (asymptotically) significantly more efficient than the known constructions. That said, the analysis of the mere termination of these algorithms still relies on algebraic geometry, in the sense that it requires “blackbox application” of suitable existence results for these schemes. Our method employs a nontrivial, novel adaptation of a classical (and ubiquitous) paradigm from coding theory that enables the transformation of existence results on asymptotically good codes into explicit constructions of such codes via concatenation, at some constant loss in the parameters achieved. In a nutshell, the generating idea is to combine a cascade of explicit but “asymptotically bad yet good enough” schemes with an asymptotically good one in such a judicious way that the latter can be selected with a number of players exponentially small in that of the compound scheme. This opens the door t
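
    For readers meeting arithmetic secret sharing for the first time, the sketch below shows its simplest instance, Shamir's threshold scheme over a prime field. The schemes in the paper are far stronger (asymptotically good, strongly multiplicative, over a fixed field), so this is background only:

```python
import random

P = 2**61 - 1   # a prime modulus (illustrative choice)

def share(secret, n, t):
    """Shamir sharing: any t+1 shares reconstruct; any t reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=42, n=5, t=2)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares recover 42
```

    The multiplicativity that the paper's schemes additionally guarantee means, roughly, that products of shared secrets can also be computed from local products of shares, which is what makes them useful in multi-party computation.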

    Preparation of name and address data for record linkage using hidden Markov models

    Get PDF
    BACKGROUND: Record linkage refers to the process of joining records that relate to the same entity or event in one or more data collections. In the absence of a shared, unique key, record linkage involves the comparison of ensembles of partially-identifying, non-unique data items between pairs of records. Data items with variable formats, such as names and addresses, need to be transformed and normalised in order to validly carry out these comparisons. Traditionally, deterministic rule-based data processing systems have been used to carry out this pre-processing, which is commonly referred to as "standardisation". This paper describes an alternative approach to standardisation, using a combination of lexicon-based tokenisation and probabilistic hidden Markov models (HMMs). METHODS: HMMs were trained to standardise typical Australian name and address data drawn from a range of health data collections. The accuracy of the results was compared to that produced by rule-based systems. RESULTS: Training of HMMs was found to be quick and did not require any specialised skills. For addresses, HMMs produced equal or better standardisation accuracy than a widely-used rule-based system. However, accuracy was worse when the approach was used with simpler name data. Possible reasons for this poorer performance are discussed. CONCLUSION: Lexicon-based tokenisation and HMMs provide a viable and effort-effective alternative to rule-based systems for pre-processing more complex, variably formatted data such as addresses. Further work is required to improve the performance of this approach with simpler data such as names. Software which implements the methods described in this paper is freely available under an open source license for other researchers to use and improve.
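
    As a sketch of the tokenise-then-tag idea (hypothetical states, token classes, and probabilities; not the published software), a trained HMM assigns each lexicon-derived token class its most likely output field via Viterbi decoding:

```python
import math

# Hypothetical HMM for address standardisation: hidden states are output
# fields, observations are token classes from a lexicon-based tokeniser.
# All numbers are made up; real parameters come from hand-tagged records.
states = ["house_no", "street_name", "street_type", "locality"]
start = {"house_no": .80, "street_name": .15, "street_type": .02, "locality": .03}
trans = {"house_no":    {"house_no": .05, "street_name": .90, "street_type": .02, "locality": .03},
         "street_name": {"house_no": .01, "street_name": .30, "street_type": .60, "locality": .09},
         "street_type": {"house_no": .01, "street_name": .04, "street_type": .05, "locality": .90},
         "locality":    {"house_no": .02, "street_name": .08, "street_type": .10, "locality": .80}}
emit = {"house_no":    {"number": .95, "word": .04, "street_word": .01},
        "street_name": {"number": .05, "word": .80, "street_word": .15},
        "street_type": {"number": .01, "word": .09, "street_word": .90},
        "locality":    {"number": .02, "word": .93, "street_word": .05}}

def viterbi(obs):
    """Most likely hidden-state sequence for a list of token classes."""
    layer = {s: (math.log(start[s]) + math.log(emit[s][obs[0]]), [s])
             for s in states}
    for o in obs[1:]:
        layer = {s: max((layer[r][0] + math.log(trans[r][s]) + math.log(emit[s][o]),
                         layer[r][1] + [s]) for r in states)
                 for s in states}
    return max(layer.values())[1]

# "42 Wattle St Richmond" tokenised into classes by the tokeniser:
print(viterbi(["number", "word", "street_word", "word"]))
```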

    Factor graph based detection approach for high-mobility OFDM systems with large FFT modes

    Get PDF
    In this article, a novel detector design is proposed for orthogonal frequency division multiplexing (OFDM) systems over frequency-selective, time-varying channels. Specifically, we focus on systems with large OFDM symbol lengths, where design and complexity constraints must be taken into account and many of the existing intercarrier interference (ICI) reduction techniques cannot be applied. We propose a factor graph (FG) based approach to maximum a posteriori (MAP) symbol detection which exploits the frequency diversity introduced by the ICI within the OFDM symbol. The proposed algorithm provides high diversity orders, allowing it to outperform ICI-free performance in high-mobility scenarios, and has an inherently parallel structure suitable for large OFDM block sizes. The performance of this near-optimal detection strategy is analyzed over a general bit-interleaved coded modulation (BICM) system applying low-density parity-check (LDPC) codes. The inclusion of pilot symbols is also considered in order to analyze how they assist the detection process.
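
    A toy illustration (not the paper's factor-graph detector) of how ICI couples subcarriers and how exact MAP symbol detection can exploit that coupling: with a banded frequency-domain channel matrix, per-symbol posteriors follow from brute-force marginalization, which the FG algorithm approximates at far lower cost:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N, sigma2 = 4, 0.1
symbols = np.array([1 + 0j, -1 + 0j])   # BPSK alphabet

# Banded frequency-domain channel: off-diagonal terms model ICI leakage
# between neighbouring subcarriers.
H = np.eye(N, dtype=complex)
for k in range(N - 1):
    H[k, k + 1] = H[k + 1, k] = 0.3

x = rng.choice(symbols, N)
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = H @ x + noise

# Exact MAP per-symbol marginals by enumerating all |A|^N hypotheses.
post = np.zeros((N, len(symbols)))
for hyp in itertools.product(range(len(symbols)), repeat=N):
    xh = symbols[list(hyp)]
    w = np.exp(-np.linalg.norm(y - H @ xh) ** 2 / sigma2)
    for n, s in enumerate(hyp):
        post[n, s] += w
post /= post.sum(axis=1, keepdims=True)
print("true:", x.real.astype(int), " MAP:", symbols[post.argmax(axis=1)].real.astype(int))
```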

    Between a Rock and a Hard Place: Habitat Selection in Female-Calf Humpback Whale (Megaptera novaeangliae) Pairs on the Hawaiian Breeding Grounds

    Get PDF
    The Au'au Channel between the islands of Maui and Lanai, Hawaii, comprises critical breeding habitat for humpback whales (Megaptera novaeangliae) of the Central North Pacific stock. However, like many regions where marine megafauna gather, these waters are also the focus of a flourishing local eco-tourism and whale-watching industry. Our aim was to establish current trends in habitat preference among female-calf humpback whale pairs within this region, focusing specifically on the busy eastern portions of the channel. We used an equally spaced zigzag transect survey design, compiled our results in a GIS model to identify spatial trends, and calculated Neu's indices to quantify levels of habitat use. Our study revealed that while mysticete female-calf pairs on breeding grounds typically favor shallow, inshore waters, female-calf pairs in the Au'au Channel avoided shallow waters (<20 m) and regions within 2 km of the shoreline. Preferred regions for female-calf pairs comprised water depths of 40–60 m, regions of rugged bottom topography, and regions lying between 4 and 6 km from a small boat harbor (Lahaina Harbor) that fell within the study area. In contrast to other humpback whale breeding grounds, there was only minimal evidence of the typical patterns of stratification or segregation according to group composition. A review of habitat use by maternal females across Hawaiian waters indicates that maternal habitat choice varies between localities within the Hawaiian Islands, suggesting that maternal females alter their use of habitat according to locally varying pressures. This ability to respond to varying environments may be the key that allows wildlife species to persist in regions where human activity and critical habitat overlap.
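
    For reference, Neu's utilization-availability index compares the proportion of sightings in a habitat category with the proportion of the study area that category occupies; values above 1 suggest preference and values below 1 suggest avoidance. A minimal sketch with made-up numbers (not the study's data):

```python
# Neu's index: ratio of habitat use to habitat availability.
# All numbers below are invented for illustration.
categories = ["<20 m", "20-40 m", "40-60 m", ">60 m"]
area_prop  = [0.25, 0.30, 0.25, 0.20]   # proportion of study area
sightings  = [4, 30, 52, 14]            # female-calf pair sightings

total = sum(sightings)
for cat, avail, used in zip(categories, area_prop, sightings):
    use_prop = used / total
    print(f"{cat:>8}: use={use_prop:.2f}, avail={avail:.2f}, "
          f"Neu index={use_prop / avail:.2f}")
```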

    Fast bounded-distance decoding of the Nordstrom-Robinson code

    No full text
    Based on the two-level squaring construction of the Reed-Muller code, a bounded-distance decoding algorithm for the Nordstrom-Robinson code is given. This algorithm involves 199 real operations, less than half the computational complexity of the known maximum-likelihood decoding algorithms for this code. The algorithm also has exactly the same effective error coefficient as maximum-likelihood decoding, so its performance is degraded only by a negligible amount.
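
    For context, the effective error coefficient N_eff appears in the standard union-bound estimate of block error rate for soft-decision decoding on the AWGN channel (a textbook approximation, not taken from the paper), which is why matching it preserves near-ML performance:

```latex
% Union-bound estimate: minimum distance d, effective error
% coefficient N_eff, per-symbol signal-to-noise ratio E_s/N_0.
\[
  P_e \;\approx\; N_{\mathrm{eff}} \;
  Q\!\left( \sqrt{ \frac{2 d E_s}{N_0} } \right)
\]
```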

    Linear Codes and Their Duals Over Artinian Rings

    No full text